Biological Imaging
Cambridge University Press (CUP)
All preprints, ranked by how well they match Biological Imaging's content profile, based on 15 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Preissinger, K.; Kezsmarki, I.; Török, J.
Due to climate change and the COVID-19 pandemic, the number of malaria cases and deaths increased between 2019 and 2020 [1]. Reversing this trend and eliminating malaria worldwide requires improvements in malaria diagnosis, in which artificial intelligence (AI) has recently been demonstrated to have great potential. Here, we describe an AI-based approach that boosts the performance of light (LM), atomic force (AFM) and fluorescence microscopy (FM)-based malaria diagnosis. The main challenge is that stage-specific recognition of infected red blood cells (RBCs) usually requires large sets of microscopy images for training a neural network, which are difficult to obtain. Our tool, the Malaria Stage Classifier, provides fast, high-accuracy recognition that works even with limited training sets thanks to a smart reduction of data dimension. Individual RBCs are extracted from an image, reduced to characteristic one-dimensional cross-sections, and classified. We show that our method is applicable to images recorded by various microscopy techniques. It is available as a software package at https://github.com/KatharinaPreissinger/Malaria_stage_classifier and can be used within a Python environment. Technical support is provided by the corresponding author (katharina.preissinger@physik.uni-augsburg.de). Author summary: The Malaria Stage Classifier is a software tool that helps the user detect and stage RBCs infected with malaria. Accurate recognition of malaria-infected RBCs remains a challenge in endemic regions, as it is time-consuming and subjective. These deficiencies can be overcome by autonomous, computer-assisted recognition using neural networks (NNs). The Malaria Stage Classifier offers a user-friendly interface for the stage-specific classification of malaria-infected RBCs into four categories: healthy cells and three classes of infected cells according to parasite age.
The use of data reduction, which forms the central element of the Malaria Stage Classifier, allows for fast and accurate classification of RBCs. It is applicable to light, atomic force, and fluorescence microscopy images and allows for retraining the implemented NN with new images. Our simple concept also has the potential to be generalised to the classification of other cells or objects.
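As a rough sketch of the data-reduction idea described above (extracting a one-dimensional cross-section from each RBC crop and classifying it), the following is a minimal stand-in; the real package trains a neural network on such profiles, and the nearest-centroid classifier and all names here are illustrative assumptions:

```python
import numpy as np

def central_cross_section(cell_img):
    # Reduce a 2D red-blood-cell crop to its central-row intensity profile.
    return cell_img[cell_img.shape[0] // 2, :].astype(float)

def nearest_centroid_stage(profile, stage_profiles):
    # Assign the profile to the stage whose reference profile is closest (L2 norm).
    dists = {stage: np.linalg.norm(profile - ref)
             for stage, ref in stage_profiles.items()}
    return min(dists, key=dists.get)
```

The dimensionality drop (2D crop to 1D profile) is what lets a classifier train on small image sets.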
Pizzagalli, D. U.; Thelen, M.; Gonzalez, S. F.; Krause, R.
2-photon intravital microscopy (2P-IVM) is a key technique to investigate cell migration and cell-to-cell interactions in organs and tissues of living organisms. Focusing on immunology, 2P-IVM has allowed recording videos of leukocytes during the immune response, highlighting unprecedented mechanisms of the immune system. However, the automatic analysis of the acquired videos remains challenging and poorly reproducible. In fact, both manual curation of results and tuning of bioimaging software parameters between experiments are required. One of the most difficult tasks for a user is transferring to a computer the knowledge of what a cell is and how it should appear with respect to the background, other objects, or other cell types. This is possibly due to the low specificity of acquisition channels, which may include multiple cell populations and similar objects in the background. In this work, we propose a method based on semi-supervised machine learning to facilitate colocalization. In line with recently proposed approaches for pixel classification, the method requires the user to draw some lines on the cells of interest and some lines on the other objects/background. These lines embed knowledge not only of which pixels belong to each class, but also of how pixels in the same object are connected. Hence, the proposed method exploits the information from the lines to create an additional imaging channel that is specific for the cells of interest. The usage of this method increased tracking accuracy on a dataset of challenging 2P-IVM videos of leukocytes. Additionally, it allowed processing multiple samples of the same experiment while keeping the same mathematical model.
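The scribble idea above can be caricatured in a few lines: score every pixel by whether its intensity is closer to the mean under the user's cell lines or under the background lines. The actual method also exploits pixel connectivity along the drawn lines; this intensity-only version and its names are assumptions:

```python
import numpy as np

def scribble_channel(image, cell_scribble, bg_scribble):
    # cell_scribble / bg_scribble: boolean masks of the pixels the user drew on.
    mu_cell = image[cell_scribble].mean()
    mu_bg = image[bg_scribble].mean()
    # Score each pixel by which scribble mean its intensity is closer to,
    # producing a synthetic channel specific to the cells of interest.
    return (np.abs(image - mu_cell) < np.abs(image - mu_bg)).astype(float)
```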
Khawatmi, M.; Steux, Y.; Zourob, S.; Sailem, H. Z.
Intuitive visualisation of quantitative microscopy data is crucial for interpreting and discovering new patterns in complex bioimage data. Existing visualisation approaches, such as bar charts, scatter plots and heat maps, do not accommodate the complexity of visual information present in microscopy data. Here we develop ShapoGraphy, a first-of-its-kind method accompanied by a user-friendly web-based application for creating interactive quantitative pictorial representations of phenotypic data and facilitating the understanding and analysis of image datasets (www.shapography.com). ShapoGraphy enables the user to create a structure of interest as a set of shapes. Each shape can encode different variables that are mapped to the shape dimensions, colours, symbols, and stroke features. We illustrate the utility of ShapoGraphy using various image data, including high dimensional multiplexed data. Our results show that ShapoGraphy allows a better understanding of cellular phenotypes and relationships between variables. In conclusion, ShapoGraphy supports scientific discovery and communication by providing a wide range of users with a rich vocabulary to create engaging and intuitive representations of diverse data types.
Gunawan, I.; Marsh, R.; Aggarwal, N.; Meijering, E.; Cox, S.; Lock, J. G.; Culley, S.
Image processing methods offer the potential to improve the quality of fluorescence microscopy data, allowing for image acquisition at lower, less phototoxic illumination doses. The training and evaluation of such methods is informed and driven by full-reference image quality metrics (IQMs); however, these metrics derive from applications to natural scene images, not fluorescence microscopy images. Here we investigate the response of IQMs to common properties of fluorescence microscopy data and whether IQMs are capable of reporting the biological information content of images. We find that IQM scores are biased by image content for both raw and processed microscopy data, and that improvements in IQM values reported after processing are not reliably correlated with performance in downstream analysis tasks. As common IQMs are unreliable proxies for guiding image processing developments in biological fluorescence microscopy, image processing performance should be benchmarked according to downstream analysis success.
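For readers unfamiliar with full-reference IQMs, peak signal-to-noise ratio (PSNR) is one of the simplest examples of the class of metrics this paper evaluates. A minimal implementation (a generic textbook version, not the authors' code):

```python
import math

def psnr(reference, test, max_val=1.0):
    # Peak signal-to-noise ratio: compares a degraded/processed image
    # against a reference via mean squared error, in decibels.
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

The paper's finding is precisely that a higher score from such a metric need not mean better downstream biological analysis.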
Zehtabian, A.; Fuchs, J.; Eickholt, B.; Ewers, H.
Brain function emerges from a highly complex network of specialized cells that are interlinked by billions of synapses. The synaptic connectivity between neurons is established between the elongated processes of their axons and dendrites or, together, neurites. To establish these billions of often far-reaching connections, cellular neurites have to grow in highly specialized, cell-type dependent patterns, often covering millimetre distances and connecting with thousands of other neurons. The outgrowth and branching of neurites are tightly controlled during development and are a commonly used functional readout of imaging in the neurosciences. Manual analysis of neuronal morphology from microscopy images, however, is very time intensive and error prone. In particular, fully automated segmentation and classification of all neurites remains unavailable in open-source software. Here we present a standalone, GUI-based software for batch-quantification of neuronal morphology in fluorescence micrographs with minimal requirements for user interaction. Neurons are segmented using a Hessian-based algorithm to detect thin neurite structures, combined with intensity- and shape-based detection of the cell body. To measure the number of branches in a neuron accurately, rather than just determining branch points, neurites are classified into axon, dendrites and their branches of increasing order by their length, using a geodesic distance transform of the cell skeleton. The software was benchmarked against a large, published dataset and reproduced the phenotype previously observed with manual annotation. Our tool promises greatly accelerated and improved morphometric studies of neuronal morphology by allowing for consistent and automated analysis of large datasets.
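The geodesic-distance step can be sketched as a breadth-first search along skeleton pixels starting from the cell body. This toy version counts diagonal steps as length 1, whereas a true geodesic transform would weight them by the square root of 2; all names are hypothetical:

```python
from collections import deque

def geodesic_distances(skeleton, soma):
    # BFS over True pixels of a 2D skeleton mask (8-connected), returning
    # each reachable skeleton pixel's path length from the soma pixel.
    dist = {soma: 0}
    queue = deque([soma])
    h, w = len(skeleton), len(skeleton[0])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and skeleton[ny][nx] and (ny, nx) not in dist:
                    dist[(ny, nx)] = dist[(y, x)] + 1
                    queue.append((ny, nx))
    return dist
```

Ordering neurites by this distance is what allows branches to be ranked by order rather than merely counting branch points.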
Benitez-Jones, M. X.; Keegan, S.; Jamshahi, S.; Fenyo, D.
Background: 53BP1 foci are reflective of DNA double-strand break formation and have been used as radiation markers. Manual focus counting, while prone to bias and time constraints, remains the most accurate mode of detecting 53BP1 foci. Several studies have pursued automated focus detection to replace manual methods. Deep learning, spatial 3D images, and segmentation techniques are main components of the highest performing automated methods. While these approaches have achieved promising results regarding accurate focus detection and cell classification, they are not compatible with time-sensitive large-scale applications due to their demand for long run times, advanced microscopy, and computational resources. Further, segmentation of overlapping foci in 2D images has the potential to represent focus morphologies inaccurately. Results: To overcome these limitations, we developed a novel method to classify 2D fluorescence microscopy images of 53BP1 foci. Our approach consisted of three key features: (1) general 53BP1 focus classes, (2) a varied parameter space composed of properties from individual foci and their respective Fourier transform, and (3) widely-available machine learning classifiers. We identified four main focus classes, which consisted of blurred foci and three levels of overlapping foci. Our parameter space for the training focus library, composed of foci formed by fluorescently-tagged BP1-2, showed a wide correlation range between variables, which was validated using a publicly-available library of immunostained 53BP1 foci. Random forest achieved one of the highest and most stable performances for binary and multiclass problems, followed by a support vector machine and k-nearest neighbors. Specific metrics impacted the classification of blurred and low overlap foci for both train and test sets. 
Conclusions: Our method classified 53BP1 foci across separate fluorescent markers, resolutions, and damage-inducing methods, using off-the-shelf machine learning classifiers, a diverse parameter space, and well-defined focus classes.
Lee, C. T.; Laughlin, J. G.; Angliviel de La Beaumelle, N.; Amaro, R.; McCammon, J. A.; Ramamoorthi, R.; Holst, M. J.; Rangamani, P.
Objective: Recent advances in electron microscopy have, for the first time, enabled imaging of single cells in 3D at a nanometer length scale resolution. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. However, this will require a system for going from EM images to 3D volume meshes which can be used in finite element simulations. Methods: In this paper, we develop an end-to-end pipeline for this task by adapting and extending computer graphics mesh processing and smoothing algorithms. Our workflow makes use of our recently rewritten mesh processing software, GAMer 2, which implements several mesh conditioning algorithms and serves as a platform to connect different pipeline steps. Results: We apply this pipeline to a series of electron micrographs of dendrite morphology explored at three different length scales and show that the resultant meshes are suitable for finite element simulations. Conclusion: Our pipeline, which consists of free and open-source community driven tools, is a step towards routine physical simulations of biological processes in realistic geometries. Significance: We posit that a new frontier at the intersection of computational technologies and single cell biology is now open. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery.
Ait Laydi, A.; Cueff, L.; Crespo, M.; El Mourabit, Y.; Bouvrais, H.
Background: Segmenting cytoskeletal filaments in microscopy images is essential for studying their roles in cellular processes such as cell division and intracellular transport. However, this task is highly challenging due to the fine, densely packed, and intertwined nature of these structures. Imaging limitations--noise, low contrast, and uneven fluorescence--further complicate analysis. While deep learning has advanced segmentation of large, well-defined biological structures, its performance often degrades under such adverse conditions. Additional challenges include obtaining precise annotations for curvilinear structures and managing severe class imbalance during training. Results: We introduce a novel noise-adaptive attention mechanism that extends the Squeeze-and-Excitation (SE) module to dynamically adjust to varying noise levels. Integrated into a U-Net decoder with residual encoder blocks, this yields ASE_Res_UNet, a lightweight yet high-performance model. To address annotation challenges, we developed a synthetic dataset generation strategy that ensures accurate annotations of fine filaments in noisy images, producing a synthetic dataset with two difficulty levels for segmentation benchmarking. We systematically evaluated loss functions and metrics to mitigate class imbalance, ensuring robust performance assessment. ASE_Res_UNet effectively segmented microtubules in noisy synthetic images, outperforming its ablated variants. It also demonstrated superior segmentation compared to models with alternative attention mechanisms or distinct architectures, while requiring fewer parameters, making it efficient for resource-constrained environments. Evaluation on a newly curated real microscopy dataset and a recently reannotated dataset highlighted ASE_Res_UNet's effectiveness in segmenting microtubules beyond synthetic images. For these datasets, ASE_Res_UNet was competitive with a recent synthetic data-driven approach that shares two cytoskeleton pretrained models. 
Importantly, ASE_Res_UNet generalised well to other curvilinear structures (blood vessels and nerves) across diverse imaging conditions. Conclusions: This work advances microtubule segmentation through three key contributions: (1) providing two benchmark datasets (synthetic and real), addressing a critical gap in standardised evaluation resources for this task; (2) introducing ASE_Res_UNet, a lightweight yet robust model combining noise-adaptive attention with residual learning; (3) validating competitive performance across synthetic and real microscopy data. Additionally, we demonstrated generalisation to diverse curvilinear structures, showcasing potential for broader applications in biological research and medical diagnosis.
Gupta, A.; Moses, A.; Lu, A. X.
Deep learning models are widely used to extract feature representations from microscopy images. While these models are used for single-cell analyses, such as studying single-cell heterogeneity, they typically operate on image crops centered on individual cells with background information present, such as other cells, and it remains unclear to what extent the conclusions of single-cell analyses may be altered by this. In this paper, we introduce a novel evaluation framework that directly tests the robustness of crop-based models to background information. We create synthetic single-cell crops where the center cell's localization is fixed and the background is swapped--e.g., with backgrounds from other protein localizations. We measure how different backgrounds affect localization classification performance using model-extracted features. Applying this framework to three leading models for single-cell microscopy for analyzing yeast protein localization, we find that all lack robustness to background cells. Localization classification accuracy drops by up to 15.8% when background cells differ in localization from the center cell compared to when the localization is the same. We further show that this lack of robustness can affect downstream biological analyses, such as the task of estimating proportions of cells for proteins with single-cell heterogeneity in localization. Ultimately, our framework provides a concrete way to evaluate single-cell model robustness to background information and highlights the importance of learning background-invariant features for reliable single-cell analysis.
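The core compositing step of the background-swap evaluation can be sketched as follows; the actual framework additionally controls the localization class of the swapped background, and all names here are illustrative:

```python
import numpy as np

def swap_background(crop, center_mask, new_background):
    # Keep the centre cell's pixels (where center_mask is True);
    # replace everything else with pixels from a different background.
    return np.where(center_mask, crop, new_background)
```

Comparing model features extracted from the original and swapped crops quantifies how much background content leaks into supposedly single-cell representations.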
Robitaille, M. C.; Byers, J. M.; Christodoulides, J. A.; Raphael, M. P.
Machine learning algorithms hold the promise of greatly improving live cell image analysis by (1) analyzing far more imagery than can be achieved by more traditional manual approaches and (2) eliminating the subjectivity of researchers and diagnosticians selecting the cells or cell features to be included in the analyzed data set. Currently, however, even the most sophisticated model-based or machine-learning algorithms require user supervision, meaning the subjectivity problem is not removed but rather incorporated into the algorithm's initial training steps and then repeatedly applied to the imagery. To address this roadblock, we have developed a self-supervised machine learning algorithm that recursively trains itself directly from the live cell imagery data, thus providing objective segmentation and quantification. The approach incorporates an optical flow algorithm component to self-label cell and background pixels for training, followed by the extraction of additional feature vectors for the automated generation of a cell/background classification model. Because it is self-trained, the software has no user-adjustable parameters and does not require curated training imagery. The algorithm was applied to automatically segment cells from their background for a variety of cell types and five commonly used imaging modalities - fluorescence, phase contrast, differential interference contrast (DIC), transmitted light and interference reflection microscopy (IRM). The approach is broadly applicable in that it enables completely automated cell segmentation for long-term live cell phenotyping applications, regardless of the input imagery's optical modality, magnification or cell type.
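A caricature of the self-labeling loop, with simple frame differencing standing in for the optical-flow component; the published algorithm extracts richer feature vectors and builds a full classification model, so everything here is an illustrative assumption:

```python
import numpy as np

def self_train_threshold(frame_a, frame_b, motion_thresh):
    # Crude stand-in for the optical-flow step: pixels that changed between
    # frames seed the 'cell' class, static pixels seed 'background'.
    motion = np.abs(frame_b - frame_a)
    cell_seed = motion > motion_thresh
    bg_seed = ~cell_seed
    # "Train" a classifier from the self-labels: here, a midpoint
    # intensity threshold between the two seeded classes.
    return 0.5 * (frame_b[cell_seed].mean() + frame_b[bg_seed].mean())
```

The point of the recursion in the real method is that no human ever draws a label: the imagery supplies its own training data.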
Keikhosravi, A.; Almansour, F.; Bohrer, C. H.; Fursova, N. A.; Guin, K.; Sood, V.; Misteli, T.; Larson, D. R.; Pegoraro, G.
High-throughput imaging (HTI) generates complex imaging datasets from a large number of experimental perturbations. Commercial HTI software for image analysis workflows does not allow full customization and adoption of new image processing algorithms in the analysis modules. While open-source HTI analysis platforms provide individual modules in the workflow, like nuclei segmentation, spot detection, or cell tracking, they are often limited in integrating novel analysis modules or algorithms. Here, we introduce the High-Throughput Image Processing Software (HiTIPS) to expand the range and customization of existing HTI analysis capabilities. HiTIPS incorporates advanced image processing and machine learning algorithms for automated cell and nuclei segmentation, spot signal detection, nucleus tracking, spot tracking, and quantification of spot signal intensity. Furthermore, HiTIPS features a graphical user interface that is open to integration of new algorithms for existing analysis pipelines and to adding new analysis pipelines through separate plugins. To demonstrate the utility of HiTIPS, we present three examples of image analysis workflows for high-throughput DNA FISH, immunofluorescence (IF), and live-cell imaging of transcription in single cells. Altogether, we demonstrate that HiTIPS is a user-friendly, flexible, and open-source HTI analysis platform for a variety of cell biology applications.
Andre, O.; Kumra Ahnlide, J.; Norlin, N.; Swaminathan, V.; Nordenfelt, P.
Light microscopy is a powerful single-cell technique that allows for quantitative spatial information at subcellular resolution. However, unlike flow cytometry and single-cell sequencing techniques, microscopy has issues achieving high-quality population-wide sample characterization while maintaining high resolution. Here, we present a general framework, data-driven microscopy (DDM), that uses population-wide cell characterization to enable data-driven high-fidelity imaging of relevant phenotypes. DDM combines data-independent and data-dependent steps to synergistically enhance data acquired using different imaging modalities. As proof-of-concept, we apply DDM with plugins for improved high-content screening and live adaptive microscopy. DDM also allows for easy correlative imaging in other systems with a plugin that uses the spatial relationship of the sample population for automated registration. We believe DDM will be a valuable approach for reducing human bias, increasing reproducibility, and placing single-cell characteristics in the context of the sample population when interpreting microscopy data, leading to an overall increase in data fidelity.
Andhari, M. D.; Rinaldi, G.; Nazari, P.; Shankar, G.; Dubroja Lakic, N.; Vets, J.; Ostyn, T.; Vanmechelen, M.; Decraene, B.; Arnould, A.; Mestdagh, W.; De Moor, B.; De Smet, F.; Bosisio, F. M.; Antoranz, A.
Fluorescent imaging has revolutionized biomedical research, enabling the study of intricate cellular processes. Multiplex immunofluorescent imaging has extended this capability, permitting the simultaneous detection of multiple markers within a single tissue section. However, these images are susceptible to a myriad of undesired artifacts, which compromise the accuracy of downstream analyses. Manual artifact removal is impractical given the large number of images generated in these experiments, necessitating automated solutions. Here, we present QUAL-IF-AI, a multi-step deep learning-based tool for automated artifact identification and management. We demonstrate the utility of QUAL-IF-AI in detecting four of the most common types of artifacts in fluorescent imaging: air bubbles, tissue folds, external artifacts, and out-of-focus areas. We show how QUAL-IF-AI outperforms state-of-the-art methodologies across a variety of multiplexing platforms, achieving over 85% classification accuracy and more than 0.6 Intersection over Union (IoU) across all artifact types. In summary, this work presents an automated, accessible, and reliable tool for artifact detection and management in fluorescent microscopy, facilitating precise analysis of multiplexed immunofluorescence images.
Dyer, J. D.; Brown, A. R.; Owen, A.; Metz, J.
Determining the relationship between biomarkers via fluorescence microscopy is a key step in the characterisation of cellular phenotypes. We define a simple distance-based measurement termed a perimeter distance mean (PDmean) which quantifies the relative proximity of objects in one fluorescent channel to objects in a second fluorescent channel in 2D or 3D microscopy datasets. PDmean measurements were able to accurately identify known changes in colocalisation in computer-generated and real-world microscopy datasets. We argue that this approach provides substantial advantages over currently used distance-based colocalisation analysis methods. We also introduce PyBioProx, an extensible open-source Python module and graphical user interface that produces PDmean measurements.
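The PDmean idea, the mean distance from the perimeter of a channel-A object to the nearest channel-B pixel, can be computed brute-force for small masks. PyBioProx itself presumably uses distance transforms for efficiency; this sketch and its names are assumptions:

```python
import numpy as np

def perimeter_pixels(mask):
    # Pixels of the mask with at least one 4-connected background neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior)

def pd_mean(mask_a, mask_b):
    # Mean distance from each perimeter pixel of the channel-A objects
    # to the nearest channel-B pixel (brute force, illustration only).
    perim = perimeter_pixels(mask_a)
    targets = np.argwhere(mask_b)
    dists = np.sqrt(((perim[:, None, :] - targets[None, :, :]) ** 2)
                    .sum(-1)).min(axis=1)
    return dists.mean()
```

Using perimeter pixels rather than centroids is what makes the measure sensitive to object shape and size, not just position.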
Gatenbee, C. D.; Baker, A.-M.; Prabhakaran, S.; Slebos, R. J. C.; Mandal, G.; Mulholland, E.; Leedham, S.; Conejo-Garcia, J. R.; Chung, C. H.; Robertson-Tessi, M.; Graham, T. A.; Anderson, A. R. A.
Spatial analyses can reveal important interactions between and among cells and their microenvironment. However, most existing staining methods are limited to a handful of markers per slice, thereby limiting the number of interactions that can be studied. This limitation is frequently overcome by registering multiple images to create a single composite image containing many markers. While there are several existing image registration methods for whole slide images (WSI), most have specific use cases. Here, we present the Virtual Alignment of pathoLogy Image Series (VALIS), a fully automated pipeline that opens, registers (rigid and/or non-rigid), and saves aligned slides in the ome.tiff format. VALIS has been tested with 273 immunohistochemistry (IHC) samples and 340 immunofluorescence (IF) samples, each of which contained between 2 and 69 images per sample. The registered WSI tend to have low error and are completed within a matter of minutes. In addition to registering slides, VALIS can also use the registration parameters to warp point data, such as cell centroids previously determined via cell segmentation and phenotyping. VALIS is written in Python and requires only a few lines of code for execution. VALIS therefore provides a free, open-source, flexible, and simple pipeline for rigid and non-rigid registration of IF and/or IHC that can facilitate spatial analyses of WSI from novel and existing datasets.
Dworak, N. M.; Cooper, M. R.; Fox, J. W.; de Oliveira, A. K.
Motivation: High-resolution biological imaging in spatial biology produces data in many proprietary formats. The lack of compatibility between these formats restricts reproducibility and analysis, creates access issues, and makes early data analysis a challenge. Results: Here, we introduce a graphical user interface (GUI) application, which we have termed Rawtunda, designed to convert proprietary image formats into standardized formats like OME-TIFF and TIFF. This tool addresses the need for easy and efficient handling of the large, complex imaging data generated in spatial biology and microscopy, facilitating data sharing, analysis, and long-term storage. Featuring an intuitive interface, the application supports users in converting .mcd and .ndpi formats, generated from Imaging Mass Cytometry (IMC) and Digital Pathology scanners, respectively. This resource aims to improve interoperability of spatial biology datasets, streamline data management workflows, and promote reproducibility in imaging research and analysis, preserving crucial image metadata. The app ensures compatibility with downstream tools and is designed for both bioinformaticians and bench biologists without coding experience. Availability and implementation: The software, documentation, and examples are available open-source at https://med.virginia.edu/spatial-biology-core/rawtunda/ under the copyright of the University of Virginia.
Lippeveld, M.; Peralta, D.; Filby, A.; Saeys, Y.
Due to the high resolution and throughput of modern image cytometry platforms, morphological profiling of the generated datasets poses a significant computational challenge. Here, we present Scalable Cytometry Image Processing (SCIP), an image processing software package aimed at running on distributed high-performance computing infrastructure. SCIP is scalable, flexible, open-source and enables reproducible image processing. It performs projection, illumination correction, segmentation, background masking and extensive morphological profiling on various imaging types. We showcase SCIP's capabilities on three large-scale image cytometry datasets. First, we process an imaging flow cytometry (IFC) dataset of human white blood cells and show how the obtained features are used to classify cells into 8 cell types based on bright- and darkfield imagery. Secondly, we process an automated microscopy dataset of human white blood cells to divide them into cell types in an unsupervised manner. Finally, a high-content screening dataset of breast cancer cells is processed to predict the mechanism of action of a large set of compound treatments. The software can be installed from the PyPI repository. Its source code is available at https://github.com/ScalableCytometryImageProcessing/SCIP under the GNU General Public License version 3. It has been tested on Unix operating systems. Issues with the software can be submitted at https://github.com/ScalableCytometryImageProcessing/SCIP/issues. Author summary: Cytometry is a field of biology that studies cells by measuring their characteristics. In image cytometry, this is done by acquiring images of cells. In order to gain biological insight from a set of images, an extensive set of measurements is derived from them describing the cells they contain. These measurements include, for instance, a cell's area, diameter, or the average brightness of the cell image. 
These measurements can then be analyzed using automated software tools to understand, for example, how cells respond to drug treatments, or how cells differ between a healthy and a diseased person. In this work, we present a novel software tool that is able to efficiently compute image measurements on large datasets of images. We do this by harnessing the power of high performance computing infrastructure. By enabling image cytometry researchers to make use of more computational power, they can more efficiently process complex and large datasets, paving the way to novel, fascinating biological discoveries.
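A minimal version of the per-cell measurement step described above, computing area and mean intensity from a label image; SCIP's actual profiling is far more extensive and runs distributed, so the names here are illustrative:

```python
import numpy as np

def profile_cells(image, labels):
    # Minimal morphological profile: area and mean intensity per labelled cell.
    features = {}
    for lab in np.unique(labels):
        if lab == 0:          # 0 = background by convention
            continue
        mask = labels == lab
        features[int(lab)] = {
            "area": int(mask.sum()),
            "mean_intensity": float(image[mask].mean()),
        }
    return features
```

On a real dataset this loop would be mapped over thousands of image tiles, which is where distributed infrastructure pays off.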
Sims, Z.; Strgar, L.; Thirumalaisamy, D.; Heussner, R.; Thibault, G.; Chang, Y. H.
Identifying individual cells or nuclei is often the first step in the analysis of multiplex tissue imaging (MTI) data. Recent efforts to produce plug-and-play, end-to-end MTI analysis tools such as MCMICRO - though groundbreaking in their usability and extensibility - are often unable to provide users guidance regarding the most appropriate models for their segmentation task among an endless proliferation of novel segmentation methods. Unfortunately, evaluating segmentation results on a user's dataset without ground truth labels is either purely subjective or eventually amounts to the task of performing the original, time-intensive annotation. As a consequence, researchers rely on models pre-trained on other large datasets for their unique tasks. Here, we propose a methodological approach for evaluating MTI nuclei segmentation methods in the absence of ground truth labels by scoring relative to a larger ensemble of segmentations. To avoid potential sensitivity to collective bias from the ensemble approach, we refine the ensemble via a weighted average across segmentation methods, which we derive from a systematic model ablation study. First, we demonstrate a proof-of-concept and the feasibility of the proposed approach to evaluate segmentation performance in a small dataset with ground truth annotation. To validate the ensemble and demonstrate the importance of our method-specific weighting, we compare the ensemble's detection and pixel-level predictions - derived without supervision - with the data's ground truth labels. Second, we apply the methodology to an unlabeled larger tissue microarray (TMA) dataset, which includes a diverse set of breast cancer phenotypes, and provide decision guidelines for the general user to more easily choose the most suitable segmentation methods for their own dataset by systematically evaluating the performance of individual segmentation approaches in the entire dataset.
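The ensemble-scoring idea can be sketched as a pixel-wise weighted vote followed by scoring each method against the consensus, with no ground truth required. In the paper the weights come from a systematic ablation study; this sketch and its names are assumptions:

```python
import numpy as np

def weighted_consensus(segmentations, weights):
    # Pixel-wise weighted vote across binary segmentations.
    stack = np.stack(segmentations).astype(float)
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (stack * w).sum(axis=0) / w.sum() > 0.5

def consensus_score(segmentation, consensus):
    # IoU of one method against the ensemble consensus.
    inter = np.logical_and(segmentation, consensus).sum()
    union = np.logical_or(segmentation, consensus).sum()
    return inter / union if union else 1.0
```

Down-weighting methods known to be weak protects the consensus from the collective bias the authors describe.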
Theiss, M.; Heriche, J.-K.; Russell, C.; Helekal, D.; Soppitt, A.; Ries, J.; Ellenberg, J.; Brazma, A.; Uhlmann, V.
Motivation: The Nuclear Pore Complex (NPC) is the only passageway for macromolecules between nucleus and cytoplasm, and one of localization microscopy's most important reference standards: it is massive and stereotypically arranged. The average architecture of NPC proteins has been resolved with pseudo-atomic precision; however, observed NPC heterogeneities evidence a high degree of divergence from this average. Single Molecule Localization Microscopy (SMLM) images NPCs at protein-level resolution, allowing image analysis software to study NPC variability. However, the true picture of NPC variability is unknown. In quantitative image analysis experiments, it is thus difficult to distinguish intrinsically high SMLM noise from true variability of the underlying structure. Results: We introduce CIR4MICS ("ceramics", Configurable, Irregular Rings FOR MICroscopy Simulations), a pipeline that creates artificial datasets of structurally variable synthetic NPCs based on architectural models of the true NPC. Users can select one or more N- or C-terminally tagged NPC proteins, and simulate a wide range of geometric variations. We also represent the NPC as a spring model such that arbitrary deforming forces, of user-defined magnitudes, simulate irregularly shaped variations. We provide an open-source simulation pipeline, as well as reference datasets of simulated human NPCs. Accompanying ground truth annotations allow users to test the capabilities of image analysis software and facilitate a side-by-side comparison with real data. We demonstrate this by synthetically replicating a geometric analysis of real NPC radii and reveal that a wide range of simulated variability parameters can lead to the observed results. Our simulator is therefore valuable to benchmark and develop image analysis methods, as well as to inform experimentalists about the requirements of hypothesis-driven imaging studies. Availability: Code: https://github.com/uhlmanngroup/cir4mics. 
Simulated data is available at BioStudies (Accession number S-BSST1058). Contact: theiss@ebi.ac.uk. Supplementary information: Supplementary data are available at
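A minimal flavour of such a simulation: generating an idealised eight-fold-symmetric ring of localisations with optional radial jitter. CIR4MICS models tagged nucleoporins and spring-based deformations; the default radius value and all names here are purely illustrative:

```python
import numpy as np

def simulate_ring(n_corners=8, radius=50.0, radial_jitter=0.0, seed=None):
    # Idealised C8-symmetric ring of 2D localisations; radial_jitter
    # (same units as radius) perturbs each corner to mimic the
    # structural variability the simulator is designed to explore.
    rng = np.random.default_rng(seed)
    angles = np.linspace(0.0, 2.0 * np.pi, n_corners, endpoint=False)
    radii = radius + rng.normal(0.0, radial_jitter, n_corners)
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
```

Fitting radii to many such jittered rings, as in the paper's replication experiment, shows how different variability parameters can produce similar observed distributions.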
Bouilhol, E.; Lefevre, E.; Barry, T.; Levet, F.; Beghin, A.; Viasnoff, V.; Galindo, X.; Galland, R.; Sibarita, J.-B.; Nikolski, M.
Automatic segmentation of nuclei in low-light microscopy images remains a difficult task, especially for high-throughput experiments where the need for automation is strong. Low saliency of nuclei with respect to the background, variability of their intensity, and low signal-to-noise ratio in these images constitute a major challenge for mainstream nuclei segmentation algorithms. In this work we introduce SalienceNet, an unsupervised deep learning-based method that uses the style transfer properties of cycleGAN to transform low saliency images into high saliency images, thus enabling accurate segmentation by downstream analysis methods, without the need for any parameter tuning. We have acquired a novel dataset of organoid images with soSPIM, a microscopy technique that enables the acquisition of images in low-light conditions. Our experiments show that SalienceNet increased the saliency of these images up to the desired level. Moreover, we evaluated the impact of SalienceNet on segmentation for both Otsu thresholding and StarDist, and have shown that enhancing nuclei with SalienceNet improved segmentation results using Otsu thresholding by 30% and using StarDist by 26% in terms of IoU when compared to segmentation of non-enhanced images. Together these results show that SalienceNet can be used as a common preprocessing step to automate nuclei segmentation pipelines for low-light microscopy images.
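Since the paper benchmarks segmentation of the enhanced images with Otsu thresholding, a minimal Otsu implementation illustrates that downstream step (a generic textbook version, not the authors' pipeline):

```python
import numpy as np

def otsu_threshold(image, nbins=64):
    # Minimal Otsu threshold: choose the intensity that maximises
    # between-class variance of the image histogram.
    hist, edges = np.histogram(image, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                 # pixels at or below each bin
    w1 = hist.sum() - w0                 # pixels above each bin
    s0 = np.cumsum(hist * centers)
    m0 = s0 / np.maximum(w0, 1e-12)      # mean of lower class
    m1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)  # mean of upper class
    between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(between)]
```

On low-saliency images the two histogram modes overlap and this threshold degrades, which is exactly the failure mode saliency enhancement is meant to fix.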